[slimtensor] integration into backend #16565
Merged
meta-codesync[bot] merged 25 commits into gh/gasoonjia/101/base from gh/gasoonjia/101/head on Jan 30, 2026
Conversation
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/) [ghstack-poisoned]
This was referenced Jan 13, 2026
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16565
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 3 Unrelated Failures as of commit f8a812e with merge base 1df4dac.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following jobs failed but were present on the merge base: 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Gasoonjia added a commit that referenced this pull request on Jan 13, 2026
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/) ghstack-source-id: 333239044 Pull Request resolved: #16565
Gasoonjia added a commit that referenced this pull request on Jan 27, 2026
Pull Request resolved: #16565. Perf is maintained as before. {F1984962152} ghstack-source-id: 336200461 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/) [ghstack-poisoned]
Gasoonjia added a commit that referenced this pull request on Jan 27, 2026
Pull Request resolved: #16565. Perf is maintained as before. {F1984962152} ghstack-source-id: 336233120 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request on Jan 27, 2026
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #16565 * #16551 * #16469 * #16457 * #16455 * #16454 * #16453 * #16452 * #16451 * #16450 * #16449 * #16448 * #16447 * #16446 * __->__ #16724
Copy CUDAGuard and CUDAStreamGuard from cuda/runtime/ to aoti/slim/cuda/ to satisfy the SlimTensor requirement while getting rid of a potential circular dependency: cuda_backend/main_functionalities -> aoti/slimtensor -> cuda_backend/cuda_guard.
This change copies guard.h, guard.cpp, and the test files from backend/cuda_backend to backend/aoti/slim/cuda/.
Differential Revision: [D91056808](https://our.internmc.facebook.com/intern/diff/D91056808/)
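For context, a CUDA device guard of this kind is an RAII wrapper around the runtime API: save the current device, switch to the requested one, and switch back on scope exit. The sketch below is a minimal, hypothetical version (the class name and layout are not the repo's actual guard.h); it assumes only the standard `cudaGetDevice`/`cudaSetDevice` calls.

```cpp
// Minimal sketch of an RAII CUDA device guard. Illustrative only; the real
// guard.h copied into backends/aoti/slim/cuda/ may differ in naming and scope.
#include <cuda_runtime.h>

class ScopedCudaDeviceGuard {  // hypothetical name
 public:
  explicit ScopedCudaDeviceGuard(int target_device) {
    // Remember whichever device was active when the guard was created.
    cudaGetDevice(&prev_device_);
    if (target_device != prev_device_) {
      cudaSetDevice(target_device);
      switched_ = true;
    }
  }

  ~ScopedCudaDeviceGuard() {
    // Restore the previous device when the guard goes out of scope.
    if (switched_) {
      cudaSetDevice(prev_device_);
    }
  }

  // Non-copyable: exactly one owner is responsible for restoring the device.
  ScopedCudaDeviceGuard(const ScopedCudaDeviceGuard&) = delete;
  ScopedCudaDeviceGuard& operator=(const ScopedCudaDeviceGuard&) = delete;

 private:
  int prev_device_ = 0;
  bool switched_ = false;
};
```

Keeping a copy of this kind of guard under aoti/slim/cuda/ lets the SlimTensor code use it without linking back into cuda_backend, which is what removes the circular dependency described above.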
Gasoonjia added a commit that referenced this pull request on Jan 27, 2026
…v2 (#16446) Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #16565 * #16551 * #16469 * #16457 * #16455 * #16454 * #16453 * #16452 * #16451 * #16450 * #16449 * #16448 * #16447 * __->__ #16446 * #16724
Add SlimTensor-based implementations of AOTI shim functions for tensor creation:
1. `aoti_torch_create_tensor_from_blob_v2()` - creates a non-owning SlimTensor that wraps existing memory using the `from_blob()` factory.
Both functions support CPU and CUDA devices and handle all 7 SlimTensor dtypes.
Also add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim implementations, so the new API can be developed without impacting the current pipeline. memory_slim.{h/cpp} will replace the current memory.{h/cpp} once everything has been set up.
Differential Revision: [D90126247](https://our.internmc.facebook.com/intern/diff/D90126247/)
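As a rough illustration of the `from_blob()` idea (wrapping caller-owned memory without copying it or taking ownership), here is a hedged sketch. The struct, field names, and simplified dtype/device handling below are hypothetical stand-ins, not the repo's SlimTensor API.

```cpp
// Hypothetical sketch of a non-owning "from_blob"-style tensor view.
// It records the caller's pointer and shape but never frees the memory.
#include <cstdint>
#include <utility>
#include <vector>

struct BlobView {  // stand-in for SlimTensor, for illustration only
  void* data = nullptr;          // borrowed, caller-owned memory
  std::vector<int64_t> sizes;    // shape
  std::vector<int64_t> strides;  // strides in elements
  int32_t dtype = 0;             // dtype code passed through the shim
  int32_t device_index = -1;     // -1 for CPU, >= 0 for a CUDA device

  static BlobView from_blob(
      void* ptr,
      std::vector<int64_t> sizes,
      std::vector<int64_t> strides,
      int32_t dtype,
      int32_t device_index) {
    BlobView t;
    t.data = ptr;  // wrap; do not copy and do not take ownership
    t.sizes = std::move(sizes);
    t.strides = std::move(strides);
    t.dtype = dtype;
    t.device_index = device_index;
    return t;
  }
  // No destructor frees `data`: whoever allocated the blob keeps owning it.
};
```

A shim such as `aoti_torch_create_tensor_from_blob_v2()` would essentially forward its pointer, size, stride, dtype, and device arguments into a factory with these semantics and hand an opaque handle back to the caller.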
Gasoonjia added a commit that referenced this pull request on Jan 27, 2026
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #16565 * #16551 * #16469 * #16457 * #16455 * #16454 * #16453 * #16452 * #16451 * #16450 * #16449 * #16448 * __->__ #16447 * #16446 * #16724
Add SlimTensor-based implementations of AOTI shim functions for tensor creation: `aoti_torch_create_tensor_from_blob_v2()` - creates a non-owning SlimTensor that wraps existing memory using the `from_blob()` factory. Both functions support CPU and CUDA devices and handle all 7 SlimTensor dtypes.
Changes:
- Add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim implementations
- Add a `runtime_shims_slim` library target to TARGETS with the `CUDA_AVAILABLE=1` preprocessor flag
- Add a `cuda_shim_slim_cpp_unittest()` function for SlimTensor test targets
Differential Revision: [D90126244](https://our.internmc.facebook.com/intern/diff/D90126244/)
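To give a sense of what a test added through a `cuda_shim_slim_cpp_unittest()`-style target might assert, here is a hedged GTest sketch. GTest is assumed only because the diff adds C++ unit-test targets; the fake type below stands in for the real SlimTensor factory under test, and the repo's actual tests may look quite different.

```cpp
// Hedged sketch of a CPU-side test for from_blob-style non-owning semantics.
#include <gtest/gtest.h>

#include <array>
#include <cstdint>

// Stand-in for the real factory under test; illustrative only.
struct FakeTensorView {
  void* data = nullptr;
  int64_t numel = 0;
  static FakeTensorView from_blob(void* ptr, int64_t numel) {
    return FakeTensorView{ptr, numel};
  }
};

TEST(FromBlobShimSketch, WrapsWithoutCopying) {
  std::array<float, 4> storage = {1.f, 2.f, 3.f, 4.f};
  auto view = FakeTensorView::from_blob(
      storage.data(), static_cast<int64_t>(storage.size()));

  // The view must alias the caller's buffer rather than copy it.
  EXPECT_EQ(view.data, storage.data());
  EXPECT_EQ(view.numel, 4);

  // Writes through the original buffer are visible through the view.
  storage[0] = 42.f;
  EXPECT_FLOAT_EQ(static_cast<float*>(view.data)[0], 42.f);
}
```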
larryliu0820 approved these changes on Jan 28, 2026
Contributor larryliu0820 left a comment: Review automatically exported from Phabricator review in Meta.
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor (see the sketch below)
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. <img width="3092" height="1902" alt="image" src="https://github.com/user-attachments/assets/6061576b-0d4b-4b20-ac8d-5f45493737d8" />
Note that we are still keeping two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/) [ghstack-poisoned]
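To make item 1 above concrete, here is a hedged sketch of what "create a SlimTensor from a given ETensor" can look like: reuse the ETensor's existing data pointer, sizes, and strides in a non-owning view rather than copying. The type names and members below are hypothetical stand-ins, not the repo's actual ETensor/SlimTensor interfaces.

```cpp
// Hypothetical sketch: wrap an existing ETensor-style tensor in a
// SlimTensor-style non-owning view. Names are illustrative only; the actual
// conversion in the CUDA backend uses the repo's real types.
#include <cstdint>
#include <vector>

struct EtensorLike {   // stand-in for the ETensor handed to the backend
  void* data_ptr;
  std::vector<int64_t> sizes;
  std::vector<int64_t> strides;
};

struct SlimViewLike {  // stand-in for SlimTensor
  void* data;
  std::vector<int64_t> sizes;
  std::vector<int64_t> strides;
};

// A backend that receives ETensor-style inputs can wrap them like this, so
// downstream code sees SlimTensor-shaped arguments without extra allocation
// or copies of the underlying memory.
inline SlimViewLike to_slim_view(const EtensorLike& et) {
  return SlimViewLike{et.data_ptr, et.sizes, et.strides};
}
```

Because the view borrows the ETensor's storage, no extra copies are introduced, which is consistent with the observation above that perf stays the same.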
Gasoonjia added a commit that referenced this pull request on Jan 29, 2026
Pull Request resolved: #16565
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. {F1984982156}
Note that we currently keep two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
ghstack-source-id: 336538676 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request on Jan 29, 2026
Pull Request resolved: #16565
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. {F1984982156}
Note that we currently keep two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
ghstack-source-id: 336658381 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request on Jan 29, 2026
Pull Request resolved: #16565
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. {F1984982156}
Note that we currently keep two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
ghstack-source-id: 336675369 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request on Jan 29, 2026
Pull Request resolved: #16565
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. {F1984982156}
Note that we currently keep two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
ghstack-source-id: 336849886 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request on Jan 29, 2026
Pull Request resolved: #16565
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. {F1984982156}
Note that we currently keep two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
ghstack-source-id: 336891991 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request on Jan 30, 2026
Pull Request resolved: #16565
This diff makes the CUDA backend actually use SlimTensor. It:
1. updates cuda_backends to create a SlimTensor from a given ETensor
2. removes the duplicate ETensor-driven shim layers under cuda_backend
3. updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before. {F1984982156}
Note that we currently keep two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
ghstack-source-id: 336976428 @exported-using-ghexport Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
meta-codesync[bot] merged commit 7ae87a5 into gh/gasoonjia/101/base on Jan 30, 2026. 202 of 210 checks passed.
Gasoonjia added a commit that referenced this pull request on Jan 30, 2026
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #16565 by @Gasoonjia ^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/101/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/101/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/101/orig
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
@diff-train-skip-merge
Co-authored-by: gasoonjia <gasoonjia@icloud.com>
larryliu0820 pushed a commit that referenced this pull request on Feb 2, 2026
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #16565 by @Gasoonjia ^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/101/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/101/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/101/orig
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
@diff-train-skip-merge
Co-authored-by: gasoonjia <gasoonjia@icloud.com> Co-authored-by: Cursor <cursoragent@cursor.com>
This diff makes the CUDA backend actually use SlimTensor. It:
- updates cuda_backends to create a SlimTensor from a given ETensor
- removes the duplicate ETensor-driven shim layers under cuda_backend
- updates the CMake logic in both the CUDA backend and the AOTI backend
Perf stays the same as before.
Note that we are still keeping two sets of common shims: an ETensor-based one for the Metal backend and a SlimTensor-based one used by the CUDA backend, so the Metal backend work is not impacted. Once the Metal backend finishes the migration, we should delete the duplicate common shims and keep only the SlimTensor-based one.
Stack from ghstack (oldest at bottom):
Differential Revision: D90606409